Recursive InterNetwork Architecture (RINA)
The Recursive InterNetwork Architecture (RINA) is a computer network architecture that unifies distributed computing and telecommunications. RINA's fundamental principle is that computer networking is just Inter-Process Communication (IPC). RINA reconstructs the overall structure of the Internet, forming a model that comprises a single repeating layer, the DIF (Distributed IPC Facility), which is the minimal set of components required to allow distributed IPC between application processes. RINA inherently supports mobility, multi-homing and Quality of Service without the need for extra mechanisms, provides a secure and programmable environment, fosters a more competitive marketplace, and allows for seamless adoption.
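The single repeating layer can be illustrated with a minimal, hypothetical sketch (the class and method names below are illustrative only, not part of any RINA specification): every DIF exposes the same IPC service to whatever uses it, and all but the lowest-rank DIF obtain their own transport from a DIF of smaller scope beneath them.
 # Minimal sketch of RINA's single repeating layer (illustrative names only).
 # Each DIF offers the same IPC interface; all but the lowest-rank DIF use a
 # DIF of smaller scope below them as their "wire".
 class DIF:
     def __init__(self, name, lower_dif=None):
         self.name = name            # name of this Distributed IPC Facility
         self.lower_dif = lower_dif  # smaller-scope DIF used as transport, if any
     def allocate_flow(self, src_app, dst_app, qos):
         # A real DIF would perform enrollment, name resolution and policy
         # selection here; this sketch only records which DIFs are traversed.
         path = [self.name]
         if self.lower_dif is not None:
             path += self.lower_dif.allocate_flow(self.name + ":" + src_app,
                                                  self.name + ":" + dst_app, qos)
         return path
 # Three ranks of the same layer: point-to-point, provider network, internetwork.
 ptp = DIF("ethernet-like-DIF")
 net = DIF("provider-DIF", lower_dif=ptp)
 inet = DIF("internet-DIF", lower_dif=net)
 print(inet.allocate_flow("app-A", "app-B", qos={"loss": "low"}))
 # -> ['internet-DIF', 'provider-DIF', 'ethernet-like-DIF']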
==History and motivation==
The principles behind RINA were first presented by John Day in his book ''Patterns in Network Architecture: A Return to Fundamentals''.〔''Patterns in Network Architecture: A Return to Fundamentals'', John Day (2008), Prentice Hall, ISBN 978-0132252423〕 This work starts afresh, taking into account the lessons learned during TCP/IP's 35 years of existence, as well as the lessons of OSI's failure and of other network technologies of the past few decades, such as CYCLADES, DECnet, and Xerox Network Systems.
From the early days of telephony to the present, the telecommunications and computing industries have evolved significantly. However, they have been following separate paths, without achieving full integration that can optimally support distributed computing; the paradigm shift from telephony to distributed applications is still not complete. Telecoms have been focusing on connecting devices, perpetuating the telephony model where devices and applications are the same. A look at the current Internet protocol suite shows many symptoms of this thinking:〔A. McKenzie, “INWG and the Conception of the Internet: An Eyewitness Account”; IEEE Annals of the History of Computing, vol. 33, no. 1, pp. 66–71, 2011〕
* The network routes data between interfaces of computers, as the public switched telephone network switched calls between phone terminals. However, it is not the source and destination ''interfaces'' that wish to communicate, but the distributed ''applications''.
* Applications have no way of expressing their desired service characteristics to the network, other than choosing a reliable (TCP) or unreliable (UDP) type of transport. The network assumes that applications are homogeneous by providing only a single quality of service.
* The network has no notion of application names, and has to use a combination of the interface address and the transport-layer port number to identify different applications. In other words, the network uses information on ''where'' an application is located to identify ''which'' application it is. Every time the application changes its point of attachment, it appears to the network to be a different application, greatly complicating multi-homing, mobility, and security (the sketch after this list illustrates the contrast).
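A minimal sketch of the naming contrast described above, assuming hypothetical names and data structures (this is not RINA code): when a directory is keyed by application name, mobility and multi-homing reduce to updating the set of attachment points, while the application's identity never changes.
 # Illustrative contrast (not RINA code): identifying an application by an
 # (interface address, port) pair versus by a location-independent name.
 # With a name-keyed directory, mobility and multi-homing only change the set
 # of attachment points; the application's identity stays the same.
 directory = {"com.example.web-server": {("192.0.2.10", 443)}}
 def application_adds_interface(name, extra_loc):
     # Multi-homing: one application name, several attachment points.
     directory[name].add(extra_loc)
 def application_moves(name, old_loc, new_loc):
     # Mobility: the name is stable; only the attachment point is updated.
     directory[name].discard(old_loc)
     directory[name].add(new_loc)
 application_adds_interface("com.example.web-server", ("198.51.100.7", 443))
 application_moves("com.example.web-server", ("192.0.2.10", 443), ("203.0.113.5", 443))
 print(directory)   # the name never changed, only where it can be reached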
Several attempts have been made to propose architectures that overcome the limitations of the current Internet, under the umbrella of Future Internet research efforts. However, most proposals argue that requirements have changed and that the Internet is therefore no longer capable of coping with them. While it is true that the environment in which the technologies supporting today's Internet operate is very different from when they were conceived in the late 1970s, changing requirements are not the only reason behind the Internet's problems with multi-homing, mobility, security or QoS, to name a few. The root of the problems may be that the current Internet is based on a tradition focused on keeping the original ARPANET demo working and fundamentally unchanged, as illustrated by the following paragraphs.
1972. Multi-homing not supported by the ARPANET. In 1972, Tinker Air Force Base wanted connections to two different IMPs (Interface Message Processors, the predecessors of today's routers) for redundancy. The ARPANET designers realized that they couldn't support this feature because a host's address was the address of the IMP port the host was connected to (borrowing from telephony). To the ARPANET, two interfaces of the same host had different addresses; therefore it had no way of knowing that they belonged to the same host. The solution was obvious: as in operating systems, a logical address space naming the nodes (hosts and routers) was required on top of the physical interface address space. However, the implementation of this solution was left for future work, and it is still not done today: “IP addresses of all types are assigned to interfaces, not to nodes”.〔R. Hinden and S. Deering. "IP Version 6 Addressing Architecture". RFC 4291 (Draft Standard), February 2006. Updated by RFCs 5952, 6052〕 As a consequence, routing table sizes are orders of magnitude larger than they need to be, and multi-homing and mobility are complex to achieve, requiring both special protocols and point solutions.
1978. Transmission Control Protocol (TCP) split from the Internet Protocol (IP). Initial TCP versions performed the error- and flow-control (current TCP) and relaying and multiplexing (IP) functions in the same protocol. In 1978 TCP was split from IP, even though the two resulting layers had the same scope. This would not be a problem if i) the two layers were independent and ii) the two layers didn't contain repeated functions. However, neither condition holds: in order to operate effectively, IP needs to know what TCP is doing. IP fragmentation, and the MTU discovery workaround that TCP performs in order to avoid it, is a clear example of this issue. In fact, as early as 1987 the networking community was well aware of the problems of IP fragmentation, to the point of considering it harmful.〔C.A. Kent and J.C. Mogul. Fragmentation considered harmful. Proceedings of Frontiers in Computer Communications Technologies, ACM SIGCOMM, 1987〕 However, it was not understood as a symptom that TCP and IP were interdependent and that splitting them into two layers of the same scope had not been a good decision.
1981. Watson's fundamental results ignored. In 1981 Richard Watson provided a fundamental theory of reliable transport,〔R. Watson. Timer-based mechanism in reliable transport protocol connection management. Computer Networks, 5:47–56, 1981〕 whereby connection management requires only timers bounded by a small factor of the Maximum Packet Lifetime (MPL). Based on this theory, Watson et al. developed the Delta-t protocol,〔R. Watson. Delta-t protocol specification. Technical Report UCID-19293, Lawrence Livermore National Laboratory, December 1981〕 in which the state of a connection at the sender and receiver can be safely removed once the connection-state timers expire, without the need for explicit removal messages, and new connections are established without an explicit handshaking phase. TCP, in contrast, uses both explicit handshaking and more limited timer-based management of the connection's state. Had TCP incorporated Watson's results it would be more efficient, robust and secure, eliminating the use of SYNs and FINs and therefore all the associated complexities and vulnerabilities to attack (such as SYN flood).
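A minimal sketch of the timer-based idea, under simplified assumptions (the names, the MPL value and the 3×MPL bound are illustrative, not the Delta-t specification): connection state is kept only while a timer bounded by a small multiple of the Maximum Packet Lifetime is running, so it can be discarded without any explicit teardown exchange.
 import time
 # Illustrative sketch of timer-based connection management (not the Delta-t
 # specification). The MPL value and the 3*MPL bound are assumptions.
 MPL = 2.0                    # assumed Maximum Packet Lifetime, in seconds
 STATE_LIFETIME = 3 * MPL     # keep state for a small multiple of MPL
 class Connection:
     def __init__(self, peer):
         self.peer = peer
         self.last_activity = time.monotonic()
     def on_packet(self):
         # Any valid packet refreshes the timer; no SYN/FIN exchange is modelled.
         self.last_activity = time.monotonic()
     def state_can_be_discarded(self, now=None):
         # Once the timer expires, no packet of this connection can still be in
         # flight, so both ends may forget it without an explicit removal message.
         now = time.monotonic() if now is None else now
         return now - self.last_activity > STATE_LIFETIME
 conn = Connection("host-B")
 conn.on_packet()
 print(conn.state_can_be_discarded())                            # False: state still needed
 print(conn.state_can_be_discarded(now=time.monotonic() + 10))   # True: timer expired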
1983. Internetwork layer lost: the Internet ceases to be an Internet. Early in 1972 the International Network Working Group (INWG) was created to bring together the nascent network research community. One of the early tasks it accomplished was voting on an international network transport protocol, which was approved in 1976. A remarkable aspect is that the selected option, as well as all the other candidates, had an architecture composed of three layers of increasing scope: data link (to handle different types of physical media), network (to handle different types of networks) and internetwork (to handle a network of networks), each layer with its own addresses. When TCP/IP was introduced, it ran at the internetwork layer on top of the Network Control Program (NCP) and other network technologies. But when NCP was shut down, TCP/IP took over the network role and the internetwork layer was lost.〔J. Day. How in the Heck Do You Lose a Layer!? 2nd IFIP International Conference of the Network of the Future, Paris, France, 2011〕 As a result, the Internet ceased to be an Internet and became a concatenation of IP networks with an end-to-end transport layer on top. A consequence of this decision is the complex routing system required today, with both intra-domain and inter-domain routing happening at the network layer,〔E.C. Rosen. Exterior Gateway Protocol (EGP). RFC 827, October 1982. Updated by RFC 904.〕 as well as the use of NAT (Network Address Translation) as a mechanism for allowing independent address spaces within a single network layer.
1983. First opportunity to fix addressing missed. The need for application names and for distributed directories mapping application names to internetwork addresses had been well understood since the mid-1970s. They were not there at the beginning, since providing them was a major effort and there were very few applications, but they were expected to be introduced once the “host file” was automated (the host file was centrally maintained and mapped human-readable synonyms of addresses to their numeric values). However, application names were not introduced, and DNS, the Domain Name System, was designed and deployed, continuing to use well-known ports to identify applications. The advent of the web and HTTP created the need for application names, introducing URLs. However, the URL format ties each application instance to a physical interface of a computer and a specific transport connection (since the URL contains the DNS name of an IP interface and a TCP port number), making multi-homing and mobility very hard to achieve.
1986. Congestion collapse takes the Internet by surprise. Though the problem of congestion control in datagram networks had been known from the very beginning (there had been several publications during the 1970s and early 1980s〔D. Davies. Methods, tools and observations on flow control in packet-switched data networks. IEEE Transactions on Communications, 20(3): 546–550, 1972〕〔S. S. Lam and Y.C. Luke Lien. Congestion control of packet communication networks by input buffer limits - a simulation study. IEEE Transactions on Computers, 30(10), 1981.〕), the congestion collapse of 1986 caught the Internet by surprise. What is worse, the congestion-avoidance scheme adopted was taken from Ethernet networks with a few modifications, and it was put in TCP. The effectiveness of a congestion-control scheme is determined by the time-to-notify, i.e. the reaction time. Putting congestion avoidance in TCP maximizes the congestion-notification delay and its variance, making TCP the worst place it could be. Moreover, congestion detection is implicit, causing several problems:
# congestion avoidance mechanisms are predatory: by definition they need to cause congestion to act;
# congestion avoidance mechanisms may be triggered when the network is not congested, causing a downgrade in performance.
1992. Second opportunity to fix addressing missed. In 1992 the Internet Architecture Board (IAB) produced a series of recommendations to resolve the scaling problems of the IPv4-based Internet: address space consumption and routing information explosion. Three types of solutions were proposed: introduce Classless Inter-Domain Routing (CIDR) to mitigate the problem, design the next version of IP (IPv7) based on CLNP (ConnectionLess Network Protocol), and continue the research into naming, addressing and routing.〔Internet Architecture Board. IP Version 7 (Draft 8). Draft IAB IP version 7, July 1992〕 CLNP was an OSI-based protocol that addressed nodes instead of interfaces, solving the old multi-homing problem introduced by the ARPANET and allowing for better routing information aggregation. CIDR was introduced, but the IETF didn't accept an IPv7 based on CLNP. The IAB reconsidered its decision and the IPng process started, culminating in IPv6. One of the rules for IPng was not to change the semantics of the IP address, which continues to name the interface, perpetuating the multi-homing problem.
There are still more wrong decisions that have resulted in long-term problems for the current Internet, such as:
* In 1988 the IAB recommended using the Simple Network Management Protocol (SNMP) as the initial network management protocol for the Internet, to later transition to the object-oriented approach of the Common Management Information Protocol (CMIP).〔Internet Architecture Board. IAB Recommendations for the Development of Internet Network Management Standards. RFC 1052, April 1988〕 SNMP was a step backwards in network management, justified as a temporary measure while the more sophisticated approaches required were implemented, but the transition never happened.
* Since IPv6 didn't solve the multi-homing problem and naming the node was not accepted, the major theory pursued by the field is that the semantics of the IP address are overloaded with both identity and location information, and that the solution is therefore to separate the two, leading to the work on the Locator/Identifier Separation Protocol (LISP). However, all approaches based on LISP have scaling problems〔D. Meyer and D. Lewis. Architectural implications of Locator/ID separation. Draft Meyer Loc Id implications, January 2009〕 because i) they are based on a false distinction (identity vs. location) and ii) they do not route packets to the end destination (LISP uses the locator, which is an interface address, for routing; therefore the multi-homing problem is still there).〔J. Day. Why loc/id split isn’t the answer, 2008. Available online at http://rina.tssg.org/docs/LocIDSplit090309.pdf〕
* The discovery of bufferbloat, due to the use of large buffers in the network. Since the beginning of the 1980s it had been known that buffers should be only as large as needed to damp out transient traffic bursts,〔L. Pouzin. Methods, tools and observations on flow control in packet-switched data networks. IEEE Transactions on Communications, 29(4): 413–426, 1981〕 and no larger, since bigger buffers increase the transit delay of packets within the network.
* The inability to provide efficient solutions to security problems such as authentication, access control, integrity and confidentiality, since they were not part of the initial design. As stated in 〔D. Clark, L. Chapin, V. Cerf, R. Braden and R. Hobby. Towards the Future Internet Architecture. RFC 1287 (Informational), December 1991〕 “experience has shown that it is difficult to add security to a protocol suite unless it is built into the architecture from the beginning”.
